03 Anatomy of a User Story – Writing Guidelines

Introduction

This document provides a framework for writing clear, consistent, and testable user stories in Jira. Whether you're a Business Analyst, Product Owner, Developer, QA Engineer, or part of an Agile team, this guide will help you create user stories that drive clarity, reduce ambiguity, and accelerate delivery.

What You'll Learn

By following this guide, you will learn:

  • ✅ The standard structure and components of a user story
  • ✅ How to write clear acceptance criteria using BDD format
  • ✅ Best practices for story sizing and splitting
  • ✅ How to identify and avoid common anti-patterns
  • ✅ Techniques for uncovering hidden requirements
  • ✅ Templates and examples you can use immediately

Key Principles

  • Clarity over complexity – Simple, clear stories reduce misunderstandings
  • Value-driven – Every story must deliver measurable business value
  • Testable – Acceptance criteria must be verifiable and unambiguous
  • Collaborative – Stories facilitate conversation, not replace it
  • Iterative – Stories improve through refinement and feedback

1. Purpose of a User Story

A user story expresses a requirement from the perspective of an end-user or business role, describing:

  • What is needed
  • Why it is needed
  • How it creates value

A good user story provides clarity, shared understanding, and a foundation for estimation, development, and testing. It should always align with business needs.

User stories are not simply documentation—they are a collaboration tool. They help business analysts, product owners, developers, QA engineers, UX designers, and stakeholders align around a shared understanding.

Communication Tool, Not Just a Requirement

User stories:

  • Facilitate conversation
  • Reduce misunderstandings
  • Encourage incremental delivery
  • Promote shared accountability between business and engineering

Stories as a Planning Mechanism

They help with:

  • Prioritisation
  • Estimation (story points)
  • Sprint planning
  • Release planning
  • Roadmap management

Well-written stories reduce delays caused by ambiguity.

Essential Components

Every Jira user story must contain the following essential components. Together, these elements ensure the story is clear, testable, and ready for delivery. Missing or weak components increase the risk of ambiguity, rework, and defects.

Component | Purpose | Format
User Story | Core requirement statement | As a / I want / So that
Summary | Context and background | Free text
Acceptance Criteria | Testable conditions | Given/When/Then (BDD)

The goal is clarity, not verbosity—include what is needed to ensure shared understanding.

Story Flow Mapping (for analysts)

Before writing a user story, analysts should map the end-to-end flow of the requirement. Story flow mapping ensures that all functional, validation, and exception paths are considered before the story reaches refinement or development.

This approach:

  • Reduces missed requirements
  • Improves acceptance criteria quality
  • Makes dependencies visible early
  • Prevents rework during development and testing

A well-mapped flow becomes the backbone of the story description, objectives, and acceptance criteria.


Planning a User Story

When planning a user story, it is recommended to think through the requirement as an end-to-end flow before writing the story, summary, or acceptance criteria. This ensures that all functional paths, rules, and exceptions are considered early and reduces rework later.

Using both structured steps and a visual flow diagram improves clarity and consistency across stories.

Step 1: Identify the Trigger

What starts the behaviour?

The trigger defines when and why the process begins.

Common trigger types:

  • User actions (e.g. button click, form submission)
  • System events (e.g. scheduled job, file arrival)
  • External integrations (e.g. API callback, webhook)
  • Workflow transitions (e.g. status change)

Analyst prompts:

  • Is the trigger manual or automatic?
  • Can the trigger occur more than once?
  • Is the trigger idempotent?
  • Who or what initiates it?

Examples:

  • User clicks “Complete Enquiry”
  • Daily billing job starts at 02:00
  • Invoice file arrives in root folder
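
Where the trigger can fire more than once (see the idempotency prompt above), it helps to agree with developers what "safe to repeat" means. The sketch below is a hypothetical illustration only; the enquiry record and repository are invented names, not part of any system described in this guide.

```python
# Hypothetical sketch: 'repository' and the enquiry record are invented names used
# only to illustrate idempotent handling of the "Complete Enquiry" trigger.
def handle_complete_enquiry(enquiry_id: str, repository) -> str:
    """Process the trigger so that firing it twice causes no duplicate side effects."""
    enquiry = repository.get(enquiry_id)

    # Idempotency guard: if the enquiry is already COMPLETED, report success
    # without re-running status updates, billing, or notifications.
    if enquiry.status == "COMPLETED":
        return "ALREADY_COMPLETED"

    enquiry.status = "COMPLETED"
    repository.save(enquiry)
    return "COMPLETED"
```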

Step 2: Identify Required Inputs

What data is needed to proceed?

Inputs define the information required for the trigger to be processed.

Inputs may include:

  • User-entered fields
  • System-generated identifiers
  • Configuration values
  • Data from other systems
  • Files, metadata, or payloads

Analyst prompts:

  • Which inputs are mandatory vs optional?
  • What is the source of each input?
  • Are default values applied?
  • Are there format or length constraints?

Examples:

  • Enquiry ID
  • Current status
  • Billing date
  • Configured uplift percentage
  • User role and permissions
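
To make the mandatory/optional, default, and format prompts above concrete during refinement, a minimal validation sketch can help. The field names, defaults, and limits below are illustrative assumptions, not agreed requirements.

```python
from datetime import date

# Hypothetical sketch: field names, defaults, and limits are illustrative only.
def validate_inputs(enquiry_id: str | None,
                    billing_date: date | None,
                    uplift_percentage: float | None) -> tuple[dict, list[str]]:
    """Return the resolved inputs and any validation errors."""
    errors: list[str] = []

    # Mandatory input: the enquiry ID must always be supplied.
    if not enquiry_id:
        errors.append("Enquiry ID is mandatory")

    # Optional input with a default: billing date falls back to today's date.
    resolved_billing_date = billing_date or date.today()

    # Range constraint: uplift is optional, but if supplied it must be 0-100%.
    if uplift_percentage is not None and not 0 <= uplift_percentage <= 100:
        errors.append("Uplift percentage must be between 0 and 100")

    resolved = {
        "enquiry_id": enquiry_id,
        "billing_date": resolved_billing_date,
        "uplift_percentage": uplift_percentage,
    }
    return resolved, errors
```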

Step 3: Apply Business Rules & Validations

What conditions must be satisfied?

Business rules and validations determine whether processing can continue.

Rules can include:

  • Status validations
  • Permission checks
  • Date or time constraints
  • Regulatory or compliance rules
  • Configuration-driven logic
  • Dependency checks

Analyst prompts:

  • Under what conditions should processing stop?
  • Are rules configurable or hard-coded?
  • Are there jurisdiction-specific rules?
  • Do rules differ by role or channel?

Examples:

  • Enquiry must be in AUTHORISED or PARTIALLY_AUTHORISED state
  • User must have “Invoice Processing” permission
  • Uplift must not exceed configured maximum
  • Billing cannot occur on bank holidays
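
Rules like the examples above are easier to review when expressed as a single checkable list. The sketch below is illustrative only (the enquiry, user, and config objects are invented) and shows how status, permission, and configuration-driven rules might be evaluated together.

```python
# Hypothetical sketch: enquiry, user, and config are invented objects used only to
# illustrate status, permission, and configuration-driven checks from this step.
ALLOWED_STATUSES = {"AUTHORISED", "PARTIALLY_AUTHORISED"}

def check_business_rules(enquiry, user, config) -> list[str]:
    """Return the violated rules; an empty list means processing may continue."""
    violations = []

    # Status validation: only authorised enquiries may progress.
    if enquiry.status not in ALLOWED_STATUSES:
        violations.append(f"Enquiry must be AUTHORISED or PARTIALLY_AUTHORISED, not {enquiry.status}")

    # Permission check: the acting user needs the invoice-processing permission.
    if "INVOICE_PROCESSING" not in user.permissions:
        violations.append("User lacks the 'Invoice Processing' permission")

    # Configuration-driven rule: uplift must not exceed the configured maximum.
    if enquiry.uplift_percentage > config.max_uplift_percentage:
        violations.append("Uplift exceeds the configured maximum")

    return violations
```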

Step 4: Determine System Outputs

What does the system produce?

Outputs describe the observable results of successful processing.

Outputs may include:

  • Status updates
  • Created or updated records
  • Files or documents
  • UI messages
  • API responses
  • Notifications

Analyst prompts:

  • What is the primary output?
  • Are there secondary outputs?
  • Who consumes the output?
  • Is the output synchronous or asynchronous?

Examples:

  • Enquiry status updated to COMPLETED
  • Invoice document generated
  • Billing file saved in the correct folder
  • Success message returned to the UI

Step 5: Define the Resulting State

What does the system look like after success?

This step defines the system state once processing completes successfully.

Analyst prompts:

  • What is the new status?
  • Can the action be repeated?
  • What downstream processes are now enabled?
  • Is the entity locked or editable?

Examples:

  • Enquiry moves to COMPLETED
  • Enquiry becomes read-only
  • Invoice becomes eligible for export
  • Next workflow step is triggered
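
Writing the resulting state as an explicit transition table is one way to make repeatability and locking unambiguous. The sketch below is illustrative only and simply reuses the status names from the examples above.

```python
# Hypothetical sketch: an explicit transition table makes the resulting state,
# repeatability, and locking behaviour easy to review with the team.
ALLOWED_TRANSITIONS = {
    "AUTHORISED": {"COMPLETED"},
    "PARTIALLY_AUTHORISED": {"COMPLETED"},
    "COMPLETED": set(),  # terminal state: the action cannot be repeated
}

READ_ONLY_STATUSES = {"COMPLETED"}  # the enquiry becomes read-only once completed

def can_transition(current_status: str, new_status: str) -> bool:
    """True only if the move is explicitly allowed by the transition table."""
    return new_status in ALLOWED_TRANSITIONS.get(current_status, set())
```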

Step 6: Identify Exceptions & Failure Paths

What happens when things go wrong?

Exceptions must be explicitly captured to avoid ambiguity.

Exception types include:

  • Validation failures
  • Missing or invalid inputs
  • External system failures
  • Timeouts
  • Duplicate processing
  • Partial success scenarios

Analyst prompts:

  • What happens if a rule fails?
  • How is the failure communicated?
  • Should processing stop or continue?
  • Is retry required or allowed?

Examples:

  • Invalid status → reject action with error message
  • Billing folder missing → create or fail
  • Duplicate invoice detected → stop processing
  • External API unavailable → retry or queue
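
Failure paths are less likely to be missed when each one has an explicitly named outcome. The mapping below is a sketch of how the example exceptions above could be captured; the codes and outcomes are illustrative, not agreed behaviour.

```python
# Hypothetical sketch: failure codes and outcomes are illustrative only, mirroring
# the example exception paths listed in this step.
FAILURE_HANDLING = {
    "INVALID_STATUS": "REJECT_WITH_ERROR_MESSAGE",
    "BILLING_FOLDER_MISSING": "CREATE_FOLDER_AND_CONTINUE",
    "DUPLICATE_INVOICE": "STOP_PROCESSING",
    "EXTERNAL_API_UNAVAILABLE": "RETRY_OR_QUEUE",
}

def handle_failure(failure_code: str) -> str:
    """Look up the agreed handling for a failure path; unknown failures stop processing."""
    return FAILURE_HANDLING.get(failure_code, "STOP_PROCESSING")
```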

Step 7: Define Logging & Audit Requirements

What must be recorded?

Logging ensures traceability, supportability, and compliance.

Define:

  • Events to be logged
  • Log levels (info, warning, error)
  • Mandatory data points
  • Correlation or trace IDs

Analyst prompts:

  • Is audit logging required?
  • What details are mandatory?
  • Who will use these logs?
  • Are logs needed for compliance or reporting?

Examples:

  • Enquiry ID
  • Previous and new status
  • Timestamp
  • User or system actor
  • Error details (if applicable)
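
A structured log entry containing the mandatory data points above can be agreed with developers up front. The sketch below uses Python's standard logging module; the field names mirror the examples in this step and are assumptions, not a mandated schema.

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("enquiry.audit")

# Hypothetical sketch: the fields mirror the mandatory data points listed above.
def log_status_change(enquiry_id: str, previous_status: str, new_status: str,
                      actor: str, error: str | None = None) -> None:
    entry = {
        "enquiry_id": enquiry_id,
        "previous_status": previous_status,
        "new_status": new_status,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,   # user or system actor
        "error": error,   # only populated on failure paths
    }
    # Error level when a failure is recorded, info level otherwise.
    logger.log(logging.ERROR if error else logging.INFO, "Enquiry status change: %s", entry)
```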

Visual Flow

The following diagram represents the standard planning flow for a user story.

How Story Flow Mapping Feeds the User Story

Once the flow is mapped:

  • Trigger + Input → inform the Description
  • Rules + Exceptions → become Acceptance Criteria
  • Output + Next State → define Expected Outcomes
  • Logging → ensures audit and support readiness

This method ensures:

  • No missing acceptance criteria
  • Clear developer intent
  • Strong test coverage
  • Predictable system behaviour

2. Identifying the Goal

This section defines the core objective of the user story. Using the standard As a / I want / So that format ensures the requirement is written from the correct perspective, focuses on user intent rather than implementation, and clearly states the business value.

This structure helps teams align on who the change is for, what needs to be done, and why it matters. A well-defined goal provides a strong foundation for writing the summary, acceptance criteria, and test cases that follow.

As a [role]
I want [action]
So that [value]

Guidelines for Each Line

AS A… (Role)

  • The role must be a real person or meaningful system actor.
  • Use functional roles: e.g., "IPP invoice processor", "Fleet Manager", "Customer", "System Administrator".
  • Avoid vague roles like "User" unless unavoidable.

💡 Tip: If you're struggling to identify the role, ask: "Who benefits most from this functionality?"

I WANT… (Action/Functionality)

  • Describe what the system must do.
  • Keep it concise and precise.
  • Avoid technical implementation details.

💡 Tip: Focus on the "what" not the "how". If you find yourself describing technical details, consider creating a technical story.

SO THAT… (Value/Outcome)

  • Explain the reason or business value.
  • State why the functionality is needed.
  • Should tie to a measurable or meaningful outcome.

💡 Tip: If you can't articulate the value, the story may not be needed or may need refinement.

✅ Good examples

Clear role, clear action, clear value

As a Fleet Manager,
I want to export vehicle compliance reports,
so that I can share them with external auditors.

User-focused, not solution-focused

As a Customer,
I want to reset my password using my email,
so that I can regain access without contacting support.

Small, testable, valuable

As a Finance Admin,
I want to apply an uplift percentage to invoices,
so that the system automatically adjusts the final amount.

Role-specific, measurable outcome

As a Driver,
I want to receive reminders before my MOT expires,
so that I never miss a compliance deadline.

Avoids technical detail and describes value

As a System Admin,
I want to view all failed login attempts,
so that I can quickly identify potential security risks.

❌ Bad examples

Missing value

As a user,
I want a new dashboard.

Why? What is the benefit? What problem does it solve?

Too technical / solution-heavy

As a developer,
I want a new SQL table for storing jobs,
so that we can normalise the schema.

User stories should not describe implementation details.

Too vague

As a driver,
I want the system to work better,
so that things are easier.

Lacks clarity, scope, and measurability.

Huge / Epics disguised as stories

As an Administrator,
I want to manage all user settings,
so that I can control the system.

Too broad; needs breaking into smaller stories.

No real user role

As a system,
I want to process data faster,
so that performance improves.

There is no benefiting actor here; name the person (or meaningful system actor) who gains value, or capture purely technical work as a technical story.

Vague or Ambiguous Language

❌ Bad Examples:

"I want it to work better"
"I want improved performance"
"I want a user-friendly interface"

✅ Good Examples:

"I want response times under 2 seconds"
"I want to complete the task in 3 clicks"
"I want clear error messages with actionable guidance"


3. Summary Section

The Summary Section provides a concise, plain-English overview of what the user story is expected to achieve. It acts as a quick reference for developers, testers, and stakeholders to understand the intent of the story without reading the full description or acceptance criteria.

This section should answer the question:

“What will be different once this story is delivered?”

Purpose of the Summary Section

  • Communicates intent at a glance
  • Helps reviewers quickly understand scope
  • Supports faster refinement and estimation
  • Reduces misinterpretation of detailed acceptance criteria
  • Acts as a sanity check before development begins

Guidelines for Writing the Summary

  • Use simple, non-technical language
  • Focus on outcomes, not implementation
  • Keep each bullet one clear objective
  • Avoid edge cases and exceptions (covered in ACs)
  • Typically include 3–6 high-level bullet points
  • Ensure alignment with the As a / I want / So that statement

What to Include

  • Primary system behaviour
  • Key state or status changes
  • Critical constraints or invariants
  • High-level artefacts created or updated
  • Important business rules that must not be violated

What to Avoid

  • Technical design or solution details
  • Database, API, or UI implementation specifics
  • Error handling scenarios (covered in Acceptance Criteria)
  • Overlapping or duplicate points

Example Summary

  • Using the enquiry ID, progress the enquiry to COMPLETED status when all conditions are met
  • Ensure all required workflow triggers execute without being skipped
  • Maintain the existing folder structure when creating billing artefacts
  • Generate the billing document only once per completed enquiry
  • Capture required audit and log information for traceability

Quality Checklist

A good summary should:

  • Be readable in under 30 seconds
  • Clearly describe the end result
  • Match the acceptance criteria
  • Contain no technical jargon
  • Reflect business intent

Analyst Tip

If the summary cannot be clearly written in 3–6 bullet points, the story is likely:

  • Too large
  • Poorly defined
  • Missing clarity

In such cases, revisit the scope or split the story before refinement.


4. Acceptance Criteria (ACs)

Acceptance criteria translate the user story into testable, verifiable conditions using BDD (Given/When/Then). They define the boundaries of the story and act as the single source of truth for development, testing, and validation.

Well-written ACs reduce ambiguity, prevent scope creep, and ensure a shared understanding across analysts, developers, testers, and stakeholders.

Why BDD Format is Essential

BDD (Behaviour-Driven Development) format encourages:

  • Human-readable scenarios – Understandable by non-technical stakeholders
  • System-focused behaviour – Describes what the system does, not how
  • Test automation alignment – Can be directly translated to automated tests
  • Reproducible outcomes – Clear, unambiguous test conditions

🎯 Key Principle: Each AC should be independently testable and map directly to a test case.

BDD Format

Keyword | Purpose | Example
Given | Preconditions or starting state | "Given an enquiry is found"
When | Trigger or action performed | "When the enquiry is in AUTHORISED status"
Then | Expected system output or change | "Then update the status to COMPLETED"
And | Additional conditions or outcomes | "And all pre-conditions are met"

ACs must:

  • ✅ Be testable
  • ✅ Be unambiguous
  • ✅ Be written from the system's behaviour point of view
  • ✅ Cover positive, negative, and edge cases where appropriate

Best Practices

  • Number each AC – Use sequential numbering (1, 2, 3, etc.)
  • Use auto-numbering – Do not manually type numbers
  • Have a line break between ACs – Improves readability and reviewability
  • One behaviour per AC – Do not combine multiple expectations
  • Include visuals – Use diagrams, screenshots, or flow charts when text alone is insufficient. Visuals help explain complex workflows, integrations, or state transitions and should support (not replace) ACs.
  • 1-to-1 mapping – Each AC must map to exactly one test case
  • Be specific – Avoid vague terms like properly, correctly, fast, or secure

Examples of AC Writing Patterns

AC1 – Progressing to Completed

  Given an enquiry is found
  When the enquiry is in AUTHORISED or PARTIALLY_AUTHORISED status
  And all pre-conditions are met
  Then update the status of the enquiry to COMPLETED

AC2 – Billing Document Creation

  Given the enquiry is progressed
  When the new status is COMPLETED
  Then the billing document should be added to the correct billing folder structure

AC3 – Log Entry Capture

  Given the enquiry is progressed to COMPLETED
  When the log entry is added
  Then it should contain the required details
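
Because each AC should map to exactly one test case, it can help to show how an AC reads once automated. The sketch below is a hypothetical pytest-style test for AC1; Enquiry and complete_enquiry are invented names, and the Given/When/Then lines are kept as comments so the test stays traceable to the AC.

```python
# Hypothetical sketch: Enquiry and complete_enquiry are invented names used only to
# show the 1-to-1 mapping between AC1 and a single automated test.
def test_ac1_progressing_to_completed():
    # Given an enquiry is found
    enquiry = Enquiry(id="ENQ-123", status="AUTHORISED")

    # When the enquiry is in AUTHORISED or PARTIALLY_AUTHORISED status
    # And all pre-conditions are met
    result = complete_enquiry(enquiry, preconditions_met=True)

    # Then update the status of the enquiry to COMPLETED
    assert result.status == "COMPLETED"
```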

5. How to Split Large User Stories

When a story is too large for one sprint, use these splitting techniques:

Technique | Description | Example
Workflow steps | Split by process stages | Start → Validate → Complete
CRUD operations | Split by data operations | Create, Read, Update, Delete
Happy path vs exceptions | Split normal flow from error handling | Happy path first, then error cases
Data variations | Split by different data types | Different customer types, regions
Devices/platforms | Split by platform | Web, Mobile, API
User roles | Split by different user types | Admin vs User, Manager vs Employee

Example: "Progress enquiry to completed" can be split into:

  1. Story 1: Validate preconditions before status update
  2. Story 2: Update enquiry status to COMPLETED
  3. Story 3: Create billing file in correct structure
  4. Story 4: Log status updates with required details

💡 Tip: Start with the happy path, then add exception handling and edge cases in subsequent stories.


Large Epic Example: Manage Enquiries

Split stories into:

  • Create enquiry
  • View enquiry
  • Edit enquiry
  • Delete enquiry
  • Upload attachments
  • Search enquiries

Each becomes its own story, improving clarity and planning.


6. How to Identify Hidden Requirements

User stories often hide complexity or dependencies. Before finalising a story, use this checklist:

System Considerations

  • Are there system limits? (e.g., date rules, rate limits, file size limits)
  • Are there performance requirements? (response times, throughput)
  • Are there security requirements? (authentication, authorisation, data encryption)
  • Are there scalability considerations? (concurrent users, data volume)

Business Rules

  • Are there business rules not explicitly documented?
  • Are there compliance requirements? (GDPR, audit trails)
  • Are there approval workflows?
  • Are there notification requirements?

Integration Points

  • Are workflows dependent on external APIs?
  • Are there third-party system integrations?
  • Are there data synchronisation requirements?
  • Are there webhook or callback requirements?

Reporting & Analytics

  • Are there reporting impacts?
  • Do analytics or metrics need to be tracked?
  • Are there dashboard updates required?

Audit & Logging

  • Are there audit trails that need updating?
  • What level of logging is required?
  • Are there monitoring or alerting requirements?

💡 Tip: Review similar past stories to identify patterns of hidden requirements.


7. Conclusion

A well-structured user story enables clarity, reduces rework, and ensures developers, testers, and stakeholders share the same understanding. Following these guidelines standardises the quality of Jira stories across teams and accelerates delivery.